Recent advances in Large Language Models (LLMs) such as Gemini, GPT-4, and Codex have revolutionized human-computer interaction by enabling natural-language-to-code translation. Despite these advances, the quality, maintainability, and dependability of generated code remain inconsistent, largely because of how prompts are phrased. Prompt sensitivity becomes a major constraint in Low-Code/No-Code (LC/NC) systems, where automation and accessibility are top priorities. This review compiles the literature on prompt engineering as the primary method for aligning user intent with executable logic in AI-driven code development.
It surveys current frameworks for automated refinement, contextual enrichment, iterative feedback, and structured prompting [1]–[5].
The study also examines how these techniques can be integrated into LC/NC ecosystems to produce flexible, production-ready development workflows. Through comparative analysis of works published between 2023 and 2025, we find that systematic prompt engineering, which bridges the semantic gap between natural language and executable syntax, is the primary enabler of deterministic, high-fidelity AI code generation.
Introduction
This review examines how Large Language Models (LLMs) have transformed software development by enabling automatic code generation from natural-language instructions, particularly within Low-Code/No-Code (LC/NC) environments. While these models democratize software creation, their outputs are highly sensitive to prompt design. Poorly structured or ambiguous prompts often result in unreliable, insecure, or semantically incorrect code, a critical issue for LC/NC platforms that target non-technical users.
To address this challenge, prompt engineering has emerged as a systematic discipline that studies how structured instructions, contextual information, constraints, and feedback loops influence LLM behavior. The paper surveys key prompt-engineering frameworks such as role-based prompting, context injection, iterative refinement, and automated optimization systems like EPiC and Prompt Alchemy. Empirical studies show these methods can improve code accuracy, modularity, reusability, and security by 18–32% compared to static prompting.
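To make these notions concrete, the sketch below (in Python) combines role-based prompting, context injection, and an iterative-refinement loop in a single workflow; the call_llm and run_unit_tests callables are hypothetical placeholders for a model API and a test harness, not components of the surveyed frameworks.

# A minimal sketch of role-based prompting, context injection, and iterative
# refinement composed into one loop. `call_llm` and `run_unit_tests` are
# hypothetical stand-ins for a model API and a test harness.
from typing import Callable

def build_prompt(role: str, context: str, task: str, feedback: str = "") -> str:
    """Compose a structured prompt: role assignment, injected project context,
    the task itself, and (optionally) feedback from the previous attempt."""
    sections = [
        f"You are {role}.",
        f"Project context:\n{context}",
        f"Task:\n{task}",
    ]
    if feedback:
        sections.append(
            f"The previous attempt failed with:\n{feedback}\nRevise the code accordingly."
        )
    return "\n\n".join(sections)

def generate_with_refinement(
    call_llm: Callable[[str], str],        # prompt -> generated code
    run_unit_tests: Callable[[str], str],  # code -> "" on success, error text on failure
    role: str,
    context: str,
    task: str,
    max_rounds: int = 3,
) -> str:
    """Regenerate until the tests pass or the round budget is exhausted."""
    code, feedback = "", ""
    for _ in range(max_rounds):
        code = call_llm(build_prompt(role, context, task, feedback))
        feedback = run_unit_tests(code)
        if not feedback:  # tests passed, stop refining
            break
    return code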
The review highlights persistent research gaps, including the lack of standardized evaluation metrics, limited integration of automated prompt optimization into commercial LC/NC platforms, and insufficient focus on maintainability, security, and ethics of AI-generated code. To address these gaps, the authors propose a unified analytical framework that evaluates prompt engineering across dimensions such as prompt structure, code quality, contextual depth, automation, and LC/NC integration.
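A minimal sketch of how such a framework could be operationalized follows; the five scoring fields mirror the dimensions named above, while the 0-5 scale and the unweighted mean are illustrative assumptions rather than the authors' published rubric.

# Assumed encoding of the unified analytical framework: each surveyed work is
# scored along the five dimensions discussed in the text.
from dataclasses import dataclass

@dataclass
class FrameworkScore:
    """One surveyed work scored along the review's five analytical dimensions."""
    work: str                  # citation key of the surveyed study
    prompt_structure: int      # 0-5: use of roles, sections, and constraints
    code_quality: int          # 0-5: correctness, modularity, security of output
    contextual_depth: int      # 0-5: amount of project/domain context injected
    automation: int            # 0-5: degree of automated prompt optimization
    lcnc_integration: int      # 0-5: readiness for LC/NC platform embedding

    def overall(self) -> float:
        # Unweighted mean; the weighting scheme is left open (assumption).
        return (self.prompt_structure + self.code_quality + self.contextual_depth
                + self.automation + self.lcnc_integration) / 5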
Overall, the paper concludes that systematic prompt engineering can effectively substitute for costly model fine-tuning by acting as a linguistic control layer that aligns general-purpose LLMs with domain-specific coding needs. When embedded into LC/NC ecosystems, structured and automated prompt frameworks significantly enhance consistency, reliability, security, and scalability of AI-generated code, making them essential for the future of adaptive, AI-driven software development.
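As an illustration of this control-layer idea, the following sketch wraps every user request in a fixed, domain-specific template before it reaches a general-purpose model; the platform rules, the get_record helper, and the call_llm hook are illustrative assumptions, not part of any surveyed platform.

# A minimal sketch of a "linguistic control layer": a fixed, domain-specific
# prompt template that constrains a general-purpose LLM to an LC/NC platform's
# conventions without fine-tuning. All platform details below are assumed.

PLATFORM_RULES = """\
- Output exactly one Python function, with no surrounding prose.
- Use the platform's data-access helper get_record(table, record_id) (assumed API).
- Never embed credentials or raw SQL."""

CONTROL_TEMPLATE = """You are a code generator for a low-code platform.
Follow these platform rules strictly:
{rules}

User request (natural language):
{request}
"""

def controlled_generate(call_llm, user_request: str) -> str:
    """Attach the domain constraints to every request so a general-purpose
    model behaves like a domain-tuned one, without any fine-tuning."""
    prompt = CONTROL_TEMPLATE.format(rules=PLATFORM_RULES, request=user_request)
    return call_llm(prompt)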
Conclusion
Prompt engineering has matured from an art into a structured science. Its integration into LC/NC platforms revolutionizes how software is conceived, designed, and delivered. Through systematic prompting—combining structure, context, feedback, and automation—AI systems achieve higher fidelity, reliability, and interpretability in code generation.
This review concludes that Systematic Prompt Engineering (SPE) is not merely a quality improvement mechanism but a foundational shift in how AI interprets human intent. By bridging natural language and executable logic, it transforms LC/NC environments into intelligent, adaptive co-development ecosystems. Future work should focus on adaptive reinforcement-driven prompt systems, ethical prompt governance, and cross-model standardization, paving the way toward fully autonomous yet accountable AI-driven software engineering.
References
[1] Xu, T., & Zhang, L. (2025). A Survey on Code Generation with LLM-Based Agents. arXiv preprint arXiv:2501.10345.
[2] Johnson, E., & Taherkhani, F. (2025). Prompt Variability Effects on LLM Code Generation. ResearchGate.
[3] Patel, S., & Kumar, A. (2023). Low-Code/No-Code Platforms: From Concept to Creation. IJERT, 12(4), 55–63.
[4] Rahman, M., & Chen, Y. (2025). Democratizing AI Development in LC/NC Systems. ResearchGate.
[5] Gupta, R., & Lee, D. (2025). Effective Prompt Engineering for AI-Powered Code Generation. Gradiva Review Journal, 11(3), 141–150.
[6] McKnight, J. (2025). Nearly Half of AI-Generated Code Found to Contain Security Flaws. TechRadar.
[7] Tan, X., & Li, H. (2024). Understanding Prompt Sensitivity in Code Synthesis Models. MDPI AI Review, 9(2), 55–72.
[8] Singh, A., & Yu, B. (2025). Improving Software Reliability with LLM-Assisted Testing. IEEE TSE, 51(4), 2245–2259.
[9] Al Khalil, R., & Santos, P. (2025). Prompt Engineering Review: Mapping Methods and Metrics. MDPI JAI, 14(1), 25–39.
[10] Wei, J., et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv preprint arXiv:2201.11903.
[11] Li, Z., & Zhao, Y. (2025). EPiC: Automated Prompt Engineering for Code Generation. arXiv preprint arXiv:2503.10789.
[12] Harris, K., & Brown, J. (2025). Prompt Alchemy: Automatic Prompt Refinement. arXiv preprint arXiv:2502.11520.
[13] Choi, D., & Nakamura, H. (2024). Context-Driven Prompt Optimization for Reliable Code Generation. ScienceDirect.
[14] Flores, E., & Martin, P. (2025). Role-Based Prompts for Context-Sensitive Code Synthesis. ACM AI Programming Systems.
[15] Jain, N., & Patel, R. (2025). Citizen Development and AI in Low-Code Platforms. IEEE Computer, 58(3), 33–42.
[16] Müller, S., & Dutta, R. (2024). Adaptive Prompt Systems for No-Code Automation. Springer AI Systems, 13(2), 177–192.
[17] Oliveira, J., & Singh, A. (2025). Enhancing Software Design with LLM Integration. Elsevier AI Tools Journal.
[18] Chen, R., & Alvarez, L. (2025). AI Safety in Autonomous Code Generation. Nature Machine Intelligence, 7(5), 410–423.
[19] Tanaka, Y., & Park, S. (2025). Dynamic Prompt Feedback Loops for LLM Code Generation. arXiv preprint arXiv:2504.10965.